Adaptive Main Memory Compression

Authors

  • Irina Chihaia Tuduce
  • Thomas R. Gross
Abstract

Computer pioneers correctly predicted that programmers would want unlimited amounts of fast memory. Since fast memory is expensive, an economical solution to that desire is a memory hierarchy organized into several levels. Each level has greater capacity than the preceding one but is less quickly accessible. The goal of the memory hierarchy is to provide a memory system with cost almost as low as the cheapest level of memory and speed almost as fast as the fastest level.

In the last decades, processor performance improved much faster than the performance of the memory levels. The memory hierarchy proved to be a scalable solution, i.e., the bigger the performance gap between processor and memory, the more levels are used in the memory hierarchy. For instance, in 1980 microprocessors were often designed without caches, while in 2005 most of them come with two levels of caches on the chip. Since the fast upper memory levels are small, programs with poor locality tend to access data from the lower levels of the memory hierarchy. Therefore, these programs run slower than programs with good locality.

The sizes of all memory levels have increased continuously. Following this trend, applications would fit in higher (and faster) memory levels. However, application developers have increased their memory demands even more aggressively. Applications with large memory requirements and poor locality are becoming increasingly popular as people attempt to solve large problems (e.g., network simulators, traffic simulators, model checking, databases). Given the technology and application trends, efficiently executing large applications on a hierarchy of memories remains a challenge. Since the fast memory levels are close to the processor, bringing the working set of an application closer to the processor tends to improve the performance of the application.
An approach to bringing the application’s data closer to the processor is compressing one of the existing memory levels. This approach is becoming increasingly attractive as processors become faster and more cycles can be dedicated to (de)compressing data. This thesis provides an example of efficiently designing and implementing a compressed-memory system. We choose to investigate compression at the main memory level (RAM) because the management of this level is done in software (thus allowing for rapid prototyping and validation). Although our design and implementation are specific to main memory compression, the concepts described are general and can be applied to any level in the memory hierarchy. The key idea of main memory compression is to set aside part of main memory to hold compressed data. By compressing some of the data space, the effective memory size available to the applications is made larger and disk accesses are avoided. One of the thorny issues is that sizing the region that holds compressed data is difficult, and if not done right (i.e., the region is too large or too small) memory compression slows down the application. There are two claims that make up the core of the thesis. First, the thesis shows that it is possible to implement main memory compression in an efficient way, meaning that while
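The key idea described above, reserving part of RAM to hold compressed pages and falling back to disk only when that region overflows, can be sketched as follows. This is an illustrative Python sketch, not the thesis implementation: the class name `CompressedRegion`, the fixed `PAGE_SIZE`, and the use of `zlib` as the codec are all assumptions made for demonstration.

```python
import zlib

PAGE_SIZE = 4096  # bytes per page (illustrative value)

class CompressedRegion:
    """Part of main memory set aside to hold compressed pages (sketch)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes  # size of the reserved region
        self.used = 0                   # bytes currently occupied
        self.pages = {}                 # page number -> compressed bytes

    def store(self, page_no, data):
        """Compress a page into the region; refuse if it would overflow.

        A refusal models the case where the caller must fall back to a
        disk write instead of the cheaper in-memory compression."""
        packed = zlib.compress(data)
        if self.used + len(packed) > self.capacity:
            return False
        self.pages[page_no] = packed
        self.used += len(packed)
        return True

    def load(self, page_no):
        """Decompress a page on access -- much faster than a disk read."""
        return zlib.decompress(self.pages[page_no])
```

Because compressed pages occupy less than `PAGE_SIZE` bytes each, the region holds more pages than the raw memory it replaces, which is how the effective memory size grows. The sizing difficulty the abstract mentions corresponds to choosing `capacity_bytes` well: too small and the region spills to disk, too large and it steals memory the application could have used uncompressed.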


Similar resources

A General-Purpose Compression Scheme for Databases

Current adaptive compression schemes such as gzip and compress are impractical for database compression as they do not allow random access to individual records. The sequitur scheme of Nevill-Manning and Witten also adaptively compresses data, achieving excellent compression but with significant main-memory requirements. A preliminary version of sequitur used a semi-static modeling approach to a...


Lossless Frame Memory Compression with Low Complexity using PCT and AGR for Efficient High Resolution Video Processing

This paper proposes a lossless frame memory compression algorithm with low complexity based on the photo core transform (PCT) with adaptive prediction and adaptive Golomb-Rice (AGR) coding for high resolution video applications. The main goal of the proposed algorithm is to reduce the amount of frame memory in video codecs losslessly for high resolution video applications. The proposed method has...


A fast hardware data compression algorithm and some algorithmic extensions (D. J. Craft)

This paper reports on work at IBM's Austin and Burlington laboratories concerning fast hardware implementations of general-purpose lossless data compression algorithms, particularly for use in enhancing the data capacity of computer storage devices or systems, and transmission data rates for networking or telecommunications cha...


Different Transforms for Image Compression

The main objective of this paper is to focus on three techniques of image compression: Burrows-Wheeler Transform (BWT), Discrete Cosine Transform (DCT), and Discrete Wavelet Transform (DWT). Image processing systems can encode raw images with different degrees of precision, achieving varying levels of compression. Different encoders with different compression ratios can be built and used for dif...


Compressing XML Documents with Finite State Automata

We propose a scheme for automatically generating compressors for XML documents from Document Type Definition (DTD) specifications. Our algorithm is a lossless adaptive algorithm where the model used for compression and decompression is generated automatically from the DTD, and is used in conjunction with an arithmetic compressor to produce a compressed version of the document. The structure of t...



Publication date: 2005